Neural Network Training in Distribution Computing Method
Authors
Abstract
Similar articles
A conjugate gradient based method for Decision Neural Network training
Decision Neural Network is a new approach for solving multi-objective decision-making problems based on artificial neural networks. By using imprecise evaluation data, network training is improved and the number of required training data sets is reduced. The existing training method is based on the gradient descent method (backpropagation), and one of its limitations is its convergence speed. Therefore,...
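As a rough illustration of the contrast drawn in this abstract, the sketch below applies a nonlinear conjugate gradient update (Polak-Ribiere with a simple backtracking line search) to a toy single-layer model; the data, model, and step-size scheme are assumptions and this is not the paper's Decision Neural Network training procedure.

# Minimal sketch, assuming a toy single-layer tanh model and synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                     # toy inputs
y = np.tanh(X @ np.array([0.5, -1.0, 2.0]))       # toy targets

def loss_and_grad(w):
    pred = np.tanh(X @ w)
    err = pred - y
    loss = 0.5 * np.mean(err ** 2)
    grad = X.T @ (err * (1.0 - pred ** 2)) / len(y)
    return loss, grad

def backtracking_step(w, d, loss, step=1.0):
    # shrink the step until the loss actually decreases
    while loss_and_grad(w + step * d)[0] >= loss and step > 1e-8:
        step *= 0.5
    return step

w = rng.normal(size=3)
loss, g = loss_and_grad(w)
d = -g                                            # initial search direction
for _ in range(50):
    w = w + backtracking_step(w, d, loss) * d
    loss, g_new = loss_and_grad(w)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+ coefficient
    d = -g_new + beta * d                           # conjugate search direction
    g = g_new

print("final training error:", loss)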
Reservoir computing approaches to recurrent neural network training
Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become...
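The reservoir computing idea summarized above can be made concrete with a minimal echo state network sketch: the recurrent weights are drawn randomly, rescaled, and left untrained, and only a linear readout is fitted. The sizes, spectral-radius value, ridge penalty, and toy next-step prediction task below are assumptions, not taken from the paper.

# Minimal echo state network sketch: fixed random reservoir, trained readout only.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 200
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]   # toy input signal
target = np.roll(u[:, 0], -1)                            # predict the next value

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))                # keep spectral radius below 1

# run the reservoir; W_in and W are never trained
x = np.zeros(n_res)
states = np.empty((len(u), n_res))
for t in range(len(u)):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# train only the linear readout with ridge regression, after a washout period
washout, ridge = 100, 1e-6
S, Y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)

pred = states @ W_out
print("readout MSE:", np.mean((pred[washout:] - Y) ** 2))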
A Differential Evolution and Spatial Distribution based Local Search for Training Fuzzy Wavelet Neural Network
Many parameter-tuning algorithms have been proposed for training Fuzzy Wavelet Neural Networks (FWNNs). The absence of an appropriate structure, convergence to local optima, and slow learning are deficiencies of FWNNs in previous studies. In this paper, a Memetic Algorithm (MA) is introduced to train FWNNs and address these learning deficiencies. Differential Evolution...
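To illustrate the evolutionary component mentioned in this abstract, the following is a generic DE/rand/1/bin sketch; the objective function stands in for an FWNN training error (an assumption), and the paper's spatial-distribution-based local search is not reproduced here.

# Generic differential evolution (DE/rand/1/bin) sketch over a flat parameter vector.
import numpy as np

rng = np.random.default_rng(2)

def objective(params):
    # placeholder for an FWNN training error; here a standard multimodal benchmark
    return np.sum(params ** 2 - 10 * np.cos(2 * np.pi * params) + 10)

dim, pop_size, F, CR = 10, 30, 0.8, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fitness = np.array([objective(p) for p in pop])

for _ in range(200):
    for i in range(pop_size):
        # mutation: combine three distinct individuals other than the target
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        # binomial crossover, forcing at least one gene from the mutant
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True
        trial = np.where(mask, mutant, pop[i])
        # greedy selection
        f = objective(trial)
        if f < fitness[i]:
            pop[i], fitness[i] = trial, f

print("best error found:", fitness.min())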
Assessment of Different Training Methods in an Artificial Neural Network to Calculate 2D Dose Distribution in Radiotherapy
Introduction: Treatment planning is the most important part of treatment. One of the important inputs to treatment planning systems is the beam dose distribution data, which are typically measured or calculated, a process that can take a long time. This study aimed at shortening the time of dose calculation using an artificial neural network (ANN) and finding the best method of training t...
A Global Optimization Method for Neural Network Training
In this paper, we present a new supervised learning method called NOVEL (Nonlinear Optimization Via External Lead) for training feed-forward neural networks. In general, such learning can be considered a nonlinear global optimization problem in which the goal is to minimize the nonlinear error function that spans the space of weights. NOVEL is a trajectory-based nonlinear optimization method ...
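The formulation described here, the training error expressed as a single nonlinear function over the space of weights, can be sketched as follows; the layer sizes and toy data are assumptions, and this does not implement NOVEL itself, only the objective such a global optimizer would minimize.

# Sketch of a feed-forward network's error as one function of a flat weight vector.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(64, 4))
Y = np.sin(X.sum(axis=1, keepdims=True))      # toy regression targets
sizes = [(4, 8), (8, 1)]                      # assumed two-layer feed-forward net

def error(flat_w):
    # unpack the flat vector into layer weight matrices and compute the MSE
    out, start = X, 0
    for rows, cols in sizes:
        W = flat_w[start:start + rows * cols].reshape(rows, cols)
        start += rows * cols
        out = np.tanh(out @ W)
    return np.mean((out - Y) ** 2)

n_weights = sum(r * c for r, c in sizes)
print("error at a random point in weight space:", error(rng.normal(size=n_weights)))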
Journal
Journal title: Journal of Physics: Conference Series
Year: 2021
ISSN: 1742-6588,1742-6596
DOI: 10.1088/1742-6596/1802/3/032056